The CPU is responsible for executing all workloads on the NFV. Like other resources, the CPU is managed by the kernel. User-level applications access CPU resources by issuing system calls to the kernel. The kernel also services requests from other sources; for example, memory loads and stores can trigger page faults, which the kernel must handle. The primary consumers of CPU resources are threads (also called tasks), which belong to processes, kernel routines and interrupt routines. The kernel manages the sharing of the CPU via a CPU scheduler.
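As a minimal illustration of this syscall interface to the scheduler, the following C sketch queries which CPU a thread is currently running on with sched_getcpu() and voluntarily hands the CPU back with sched_yield(); it is a toy example, not part of any measurement tooling described here.

```c
/* Minimal sketch: a user-level thread interacting with the CPU
 * scheduler only through system calls. sched_getcpu() reports the CPU
 * the thread currently runs on; sched_yield() gives the CPU back to
 * the scheduler, which decides what runs next. */
#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>

int main(void)
{
    for (int i = 0; i < 4; i++) {
        /* Ask the kernel which CPU this thread is executing on. */
        printf("iteration %d running on CPU %d\n", i, sched_getcpu());
        /* Voluntarily relinquish the CPU. */
        sched_yield();
    }
    return 0;
}
```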
There are three thread states: ON-PROC for threads running on a CPU, RUNNABLE for threads that could run but are waiting their turn, and SLEEP for threads blocked on another event, including uninterruptible waits. For more accessible analysis, these can be grouped into two: on-CPU, referring to ON-PROC, and off-CPU, referring to all other states, where the thread is not running on a CPU. Threads leave the CPU in one of two ways: (1) voluntarily, if they block on I/O, a lock, or a sleep, or (2) involuntarily, if they have exceeded their scheduled allocation of CPU time. When a CPU switches from running one process or thread to another, it switches address spaces and other metadata; this is called a context switch, and it also consumes CPU resources. All of the above consumes CPU time. In addition to time, another CPU resource used by processes, kernel routines and interrupt routines is the CPU cache.
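The voluntary/involuntary distinction is directly visible from user space. The following minimal C sketch reads the two counters that the kernel keeps per process via getrusage(); it is only an illustration of the concept, not a profiling tool.

```c
/* Minimal sketch: counting voluntary vs. involuntary context switches
 * for the calling process. ru_nvcsw counts switches where the thread
 * gave up the CPU itself (e.g. by blocking); ru_nivcsw counts switches
 * forced by the scheduler when the time slice was used up. */
#include <stdio.h>
#include <sys/resource.h>
#include <unistd.h>

int main(void)
{
    /* Block briefly so at least one voluntary switch occurs. */
    usleep(1000);

    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) == 0) {
        printf("voluntary context switches:   %ld\n", ru.ru_nvcsw);
        printf("involuntary context switches: %ld\n", ru.ru_nivcsw);
    }
    return 0;
}
```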
There are typically multiple levels of CPU cache, increasing in both size and latency. The caches end with the last-level cache (LLC), which is large (Mbytes) and slower. On a processor with three levels of cache, the LLC is also the Level 3 cache. Processes are instructions to be interpreted and run by the CPU. These instructions are typically loaded from RAM and cached in the CPU caches for faster access. The CPU first checks the lowest-level cache, i.e., the L1 cache. If the CPU finds the data there, this is called a cache hit. If the CPU does not find the data, it looks for it in L2 and then L3. If the CPU does not find the data in any of the caches, it accesses it from system memory (RAM); when that happens, it is known as a cache miss. In general, a cache miss means higher latency, i.e., the time needed to access data from memory.
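The cost of cache misses can be made visible with a simple experiment. The C sketch below walks a buffer larger than a typical LLC with two different strides: a cache-line stride that is prefetch-friendly (mostly hits) and a page-sized stride that defeats the caches (mostly misses). The buffer size and strides are assumptions for illustration; the absolute timings depend on the CPU.

```c
/* Minimal sketch of the cache hit/miss effect: time per memory access
 * for a cache-friendly walk vs. a cache-hostile walk. Numbers are
 * illustrative only. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

#define SIZE (64 * 1024 * 1024)   /* 64 MiB, larger than a typical LLC */

/* Walk the buffer with a given stride; return nanoseconds per access. */
static double walk_ns(volatile char *buf, size_t stride)
{
    struct timespec t0, t1;
    size_t accesses = 0;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < SIZE; i += stride) {
        buf[i]++;
        accesses++;
    }
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    return ns / accesses;
}

int main(void)
{
    char *buf = malloc(SIZE);
    memset(buf, 1, SIZE);   /* fault all pages in up front */

    /* One access per 64-byte cache line: prefetch-friendly, mostly hits. */
    printf("stride 64   : %.1f ns/access\n", walk_ns(buf, 64));
    /* One access per 4 KiB page: defeats the caches, mostly misses. */
    printf("stride 4096 : %.1f ns/access\n", walk_ns(buf, 4096));

    free(buf);
    return 0;
}
```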
The kernel and processor are responsible for mapping virtual memory to physical memory. For efficiency, memory mappings are created in groups of memory called pages. When an application starts, it begins with a request for memory allocation. If the request cannot be served from free memory on the heap, the brk() syscall is issued to extend the size of the heap, or a new memory segment is created via the mmap() syscall. Initially, this virtual memory mapping does not have a corresponding physical memory allocation. Therefore, when the application tries to access the allocated memory segment, a page fault is raised by the MMU. The kernel then handles the page fault, mapping the virtual memory to physical memory. The amount of physical memory allocated to a process is called the resident set size (RSS). When there is too much memory demand on the system, the kernel page-out daemon (kswapd) may look for memory pages to free. Three types of pages can be released, in the following order: pages that were read but not modified (backed by disk), which can be freed immediately; pages that have been modified (dirty), which need to be written to disk before they can be freed; and pages of application memory (anonymous), which must be stored on a swap device before they can be released. kswapd runs periodically, scanning the inactive and active page lists for memory to free. It is woken up when free memory crosses a low threshold and goes back to sleep when it crosses a high threshold. Swapping usually causes applications to run much more slowly.
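The deferred nature of physical allocation can be observed directly. The C sketch below creates an anonymous mmap() mapping and then touches it; the minor-fault counter from getrusage() only grows once the pages are actually accessed, which is also when the process RSS grows. The mapping size is an arbitrary example value.

```c
/* Minimal sketch of the allocation path described above: mmap() creates
 * a virtual mapping with no physical pages behind it; the first write
 * to each page triggers a page fault, and only then does RSS grow. */
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/resource.h>

#define LEN (32 * 1024 * 1024)    /* 32 MiB anonymous mapping */

static long minor_faults(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

int main(void)
{
    long before = minor_faults();

    /* Virtual memory only: no physical pages are allocated yet. */
    char *p = mmap(NULL, LEN, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED)
        return 1;
    long after_map = minor_faults();

    /* Touching the pages faults them in and grows the RSS. */
    memset(p, 0, LEN);
    long after_touch = minor_faults();

    printf("minor faults: start=%ld after mmap=%ld after touch=%ld\n",
           before, after_map, after_touch);
    munmap(p, LEN);
    return 0;
}
```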
The file system is the layer that applications usually interact with directly, and file systems can use caching, read-ahead, buffering, and asynchronous I/O to avoid exposing disk I/O latency to the application. Logical I/O describes requests to the file system. If these requests must be served from the storage devices, they become physical I/O. Not all logical I/O becomes physical I/O; many logical read requests may be returned from the file system cache and never reach the device. File systems are accessed via a virtual file system (VFS), which provides operations for reading, writing, opening, closing, etc., that are mapped by file systems to their internal functions. Linux uses multiple caches to improve the performance of storage I/O via the file system. The page cache contains virtual memory pages and improves the performance of file and directory I/O. The inode cache holds inodes, the data structures used by file systems to describe their stored objects. The directory cache caches mappings from directory entry names to VFS inodes, improving the performance of pathname lookups. The page cache grows to be the largest of these because it caches the contents of files, and it includes “dirty” pages that have been modified but not yet written to disk.
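The effect of the page cache on logical reads is easy to demonstrate: reading the same file twice, the second pass is typically served from the cache and completes much faster. The sketch below assumes an example file path that should be replaced with any file of a few megabytes on the system under study.

```c
/* Minimal sketch of logical vs. physical reads through the page cache:
 * the same file is read twice; the second pass is normally served from
 * the page cache and is noticeably faster. */
#include <fcntl.h>
#include <stdio.h>
#include <time.h>
#include <unistd.h>

static double read_seconds(const char *path)
{
    char buf[1 << 16];
    struct timespec t0, t1;
    int fd = open(path, O_RDONLY);
    if (fd < 0)
        return -1;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    while (read(fd, buf, sizeof(buf)) > 0)
        ;                            /* logical reads via the VFS */
    clock_gettime(CLOCK_MONOTONIC, &t1);
    close(fd);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(void)
{
    const char *path = "/var/log/syslog";   /* placeholder path */
    printf("first read : %.4f s (may include disk I/O)\n", read_seconds(path));
    printf("second read: %.4f s (mostly page cache)\n", read_seconds(path));
    return 0;
}
```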
Linux exposes rotational magnetic media, flash-based storage, and network storage as storage devices; disk I/O refers to I/O operations on these devices. Disk I/O is a common source of performance issues because I/O latency on storage devices is orders of magnitude slower than the nanosecond or microsecond speed of CPU and memory operations. Block I/O refers to device access in blocks. I/O is queued and scheduled in the block layer. Wait time is the time spent in the block layer scheduler queues and device dispatcher queues in the operating system. Service time is the time from device issue to completion; this may include time spent waiting in an on-device queue. Request time is the overall time from when an I/O was inserted into the OS queues to its completion. The request time matters the most, as it is the time that applications must wait if the I/O is synchronous.
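A coarse view of per-device request times is available without tracing. The C sketch below derives an average I/O time per device from /proc/diskstats; the field positions follow the documented layout (reads completed, reads merged, sectors read, milliseconds reading, then the same for writes), but the exact semantics should be checked against the kernel documentation of the running kernel, so treat this as an assumption-laden sketch rather than a precise latency measurement.

```c
/* Minimal sketch: rough average block I/O time per device, computed
 * from the cumulative counters in /proc/diskstats. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/diskstats", "r");
    char line[512];

    while (f && fgets(line, sizeof(line), f)) {
        unsigned int major, minor;
        char dev[32];
        unsigned long long rd, rd_merged, rd_sectors, rd_ms;
        unsigned long long wr, wr_merged, wr_sectors, wr_ms;

        if (sscanf(line, "%u %u %31s %llu %llu %llu %llu %llu %llu %llu %llu",
                   &major, &minor, dev,
                   &rd, &rd_merged, &rd_sectors, &rd_ms,
                   &wr, &wr_merged, &wr_sectors, &wr_ms) == 11 && rd + wr > 0) {
            /* Cumulative time divided by completed I/Os approximates the
             * average request time (queueing plus service). */
            printf("%-10s avg request time: %.2f ms over %llu I/Os\n",
                   dev, (double)(rd_ms + wr_ms) / (rd + wr), rd + wr);
        }
    }
    if (f)
        fclose(f);
    return 0;
}
```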
Networking is a complex part of the Linux system. It involves many different layers and protocols, including the application, protocol libraries, syscalls, TCP or UDP, IP, and device drivers for the network interface. In general, the networking system can be broken down into four parts. NIC and device driver processing first reads packets from the NIC and puts them into kernel buffers. Besides the NIC and device driver, this stage includes DMA and dedicated memory regions in RAM, called rings, for storing receive and transmit packets, as well as the NAPI system for polling packets from these rings into the kernel buffers. It also incorporates early packet processing hooks like XDP and AF\_XDP, and can use custom drivers that bypass the kernel (i.e., the following two stages), such as DPDK. Next is socket processing. This part also includes queuing and the different queuing disciplines, and incorporates packet processing hooks such as TC and Netfilter, which can alter the flow of the networking stack. After that is the protocol processing layer, which applies the functions for the different IP and transport protocols; this processing runs in the context of SoftIRQs. Lastly is the application stage: the application receives and sends packets on the destination socket.
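As a simple way to observe the NIC/driver stage from user space, the following C sketch reads per-interface packet and byte counters from /proc/net/dev. The column layout (receive bytes and packets first, transmit bytes and packets starting at the ninth value) is an assumption based on the usual format of that file.

```c
/* Minimal sketch: per-interface receive/transmit packet counters from
 * /proc/net/dev, i.e. what the NIC/driver stage has handled. */
#include <stdio.h>

int main(void)
{
    FILE *f = fopen("/proc/net/dev", "r");
    char line[512];
    int lineno = 0;

    while (f && fgets(line, sizeof(line), f)) {
        if (++lineno <= 2)
            continue;                /* skip the two header lines */

        char ifname[32];
        unsigned long long rx_bytes, rx_pkts, tx_bytes, tx_pkts;
        /* iface: rx_bytes rx_pkts errs drop fifo frame compressed
         *        multicast tx_bytes tx_pkts ... */
        if (sscanf(line, " %31[^:]: %llu %llu %*u %*u %*u %*u %*u %*u %llu %llu",
                   ifname, &rx_bytes, &rx_pkts, &tx_bytes, &tx_pkts) == 5)
            printf("%-8s rx %llu pkts (%llu bytes), tx %llu pkts (%llu bytes)\n",
                   ifname, rx_pkts, rx_bytes, tx_pkts, tx_bytes);
    }
    if (f)
        fclose(f);
    return 0;
}
```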
A flame graph visualizes a distributed request trace, representing each service call in the request's execution path as a timed, color-coded, horizontal bar. Flame graphs for distributed traces include error and latency data to help developers identify and fix bottlenecks in their applications.
To evaluate the performance of disk I/O, we first want to know the overall block I/O device latency: the time from issuing a request to the device to when it completes, including time spent queued in the operating system. While latency can show the overall performance of disk I/O, to remediate problems it is essential to get more details, such as which processes are performing I/O on the disk and the I/O sizes in bytes. In addition to the overall request latency, it is also necessary to know the time requests spend queued in the I/O scheduler in the block layer.
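For attributing I/O to processes without tracing, the per-process counters in /proc/<pid>/io are a useful starting point. In that file, rchar/wchar count logical I/O issued by the process while read_bytes/write_bytes count storage-level (physical) I/O, so comparing them also hints at how much was absorbed by the page cache. The sketch below uses "self" as a placeholder for a target PID.

```c
/* Minimal sketch: per-process I/O accounting from /proc/<pid>/io. */
#include <stdio.h>

int main(void)
{
    const char *path = "/proc/self/io";   /* replace "self" with a PID */
    FILE *f = fopen(path, "r");
    char key[64];
    unsigned long long val;

    /* Each line has the form "name: value". */
    while (f && fscanf(f, "%63[^:]: %llu\n", key, &val) == 2)
        printf("%-22s %llu\n", key, val);

    if (f)
        fclose(f);
    return 0;
}
```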
To draw insights on the performance of networking on the NFV and for the VNF, we initially need to know the number of packets being received and their sizes. After that, we want to see the latency of the device queue, i.e., the time from when packets are pushed into the device layer for sending until they are sent out, as signalled by NAPI. After that, we want to know the time spent in the queuing disciplines. Next, we want to see the latency of IP protocol connections and the process of establishing the connection. Following this, we would also like to know the lifespan of the kernel buffers that are used to pass packets across the networking stack; this can show the latency within the networking stack. While this shows the latency, it does not show packet drops. For this, it is essential to know the number of packets and the allocated size of the socket buffers and their limits, because packets are dropped when the socket limits have been reached.
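The socket-buffer limit mentioned above can be inspected and adjusted from the application side. The C sketch below queries the receive buffer limit of a UDP socket with getsockopt() and requests a larger one with setsockopt(); the requested size is a placeholder, and the value actually granted is capped by the kernel's net.core.rmem_max setting (and reported doubled by the kernel to account for bookkeeping overhead).

```c
/* Minimal sketch: inspecting and raising a socket's receive buffer
 * limit. Packets arriving while this buffer is full are dropped. */
#include <stdio.h>
#include <sys/socket.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_DGRAM, 0);
    int rcvbuf = 0;
    socklen_t len = sizeof(rcvbuf);

    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);
    printf("default SO_RCVBUF: %d bytes\n", rcvbuf);

    int want = 4 * 1024 * 1024;            /* 4 MiB; placeholder value */
    setsockopt(fd, SOL_SOCKET, SO_RCVBUF, &want, sizeof(want));

    getsockopt(fd, SOL_SOCKET, SO_RCVBUF, &rcvbuf, &len);
    printf("after setsockopt:  %d bytes\n", rcvbuf);
    return 0;
}
```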
Memory operations can be frequent; therefore, to reduce overheads, it is important to look at some of the less frequent events that can give insights into the performance of the memory resource. The relatively infrequent activities are brk() and mmap() calls, page faults, and page-outs. An important metric is the number of memory requests that result in a new segment on the heap, i.e., requests for new mappings. Following that, it is beneficial to know the code path responsible for heap extension, which reveals the part of the application that caused the heap to grow. Another important operation is the page fault, which results in latency and growth of a process's RSS; likewise, it is important to know the code path responsible for page faults. As the system reclaims memory, we also want to know the process affected and the latency, i.e., the time taken for the reclaim.
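System-wide counts of these infrequent events are exported in /proc/vmstat and can be sampled cheaply before resorting to tracing. The counter names used in the sketch below (pgfault, pgmajfault, pswpin, pswpout, and the kswapd scan/steal counters) are assumptions to verify against the running kernel, since the exact set varies between kernel versions.

```c
/* Minimal sketch: sampling infrequent memory events (faults, swapping,
 * kswapd reclaim activity) from /proc/vmstat. */
#include <stdio.h>
#include <string.h>

int main(void)
{
    const char *wanted[] = { "pgfault", "pgmajfault", "pswpin", "pswpout",
                             "pgscan_kswapd", "pgsteal_kswapd" };
    FILE *f = fopen("/proc/vmstat", "r");
    char name[64];
    unsigned long long val;

    while (f && fscanf(f, "%63s %llu", name, &val) == 2)
        for (size_t i = 0; i < sizeof(wanted) / sizeof(wanted[0]); i++)
            if (strcmp(name, wanted[i]) == 0)
                printf("%-16s %llu\n", name, val);

    if (f)
        fclose(f);
    return 0;
}
```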
Firstly, we want to characterise virtual file system operations. This helps us know what the process is spending the most time doing on the filesystem: reads and writes (I/O), creates, opens, and syncs. After that, it is essential to know the size of data read and written, broken down by process name. This can assist in diagnosing the process responsible for degraded filesystem performance in such cases. In the same manner, in the case of frequent VFS open operations, it is essential to know the processes responsible for opening files. While the earlier results help in understanding the processes, it is also necessary to know the filenames that are most frequently read and written. At a high level, this can expose some configuration errors, for example verbose logging in production, which is the case for the Bind9 VNF in our setup. Since sockets also appear as filenames, this can also show the frequency of socket reads and writes. As mentioned earlier, the filesystem uses caches to avoid exposing disk I/O latency; therefore, another critical performance factor to consider is how the cache is performing. Applications are affected mainly by the page cache; examining the page cache hit ratio over time can give insights into the NFV configuration tuning needed.
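One simple way to reason about page-cache behaviour for a specific file is to check how much of it is currently resident, using mmap() plus mincore(). The sketch below is a minimal illustration of that technique (the approach used by tools such as vmtouch); the file path is a placeholder for a file of interest, e.g. one that a VNF reads or writes frequently.

```c
/* Minimal sketch: estimating how much of a file is resident in the
 * page cache using mmap() and mincore(). */
#define _DEFAULT_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char *path = "/var/log/syslog";        /* placeholder path */
    int fd = open(path, O_RDONLY);
    struct stat st;
    if (fd < 0 || fstat(fd, &st) < 0 || st.st_size == 0)
        return 1;

    void *map = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    long psz = sysconf(_SC_PAGESIZE);
    size_t pages = (st.st_size + psz - 1) / psz;
    unsigned char *vec = malloc(pages);

    /* mincore() fills one byte per page; bit 0 set means the page is
     * currently resident in the page cache. */
    if (map != MAP_FAILED && mincore(map, st.st_size, vec) == 0) {
        size_t resident = 0;
        for (size_t i = 0; i < pages; i++)
            resident += vec[i] & 1;
        printf("%s: %zu of %zu pages in page cache (%.1f%%)\n",
               path, resident, pages, 100.0 * resident / pages);
    }
    free(vec);
    munmap(map, st.st_size);
    close(fd);
    return 0;
}
```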